Author: debakarr
GitHub Repository: debakarr/machinelearning
Path: blob/master/Part 8 - Deep Learning/Artificial Neural Networks/[Python] Artificial Neural Network.ipynb
Kernel: Python 3

Artificial Neural Network (ANN)

I prefer using Google Colaboratory: training the model for, say, 100 epochs takes a while on an ordinary setup (while writing this code I am actually using my laptop rather than my PC), and Colaboratory is much faster.

Data Preprocessing

# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
plt.rcParams['figure.figsize'] = [14, 8]
# Importing the dataset
dataset = pd.read_csv('https://github.com/Dibakarroy1997/machinelearning/raw/master/Part%208%20-%20Artificial%20Neural%20Networks/Churn_Modelling.csv')
dataset.head(1)
# Select the feature columns (CreditScore, Geography, Age, Tenure, Balance,
# NumOfProducts, HasCrCard, IsActiveMember, EstimatedSalary) and the target (Exited)
X = dataset.iloc[:, [3, 4, 6, 7, 8, 9, 10, 11, 12]].values
y = dataset.iloc[:, 13].values
X[0]
array([619, 'France', 42, 2, 0.0, 1, 1, 1, 101348.88], dtype=object)
y[0]
1
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# Encode the Geography strings (column 1) as integers
labelencoder_X = LabelEncoder()
X[:, 1] = labelencoder_X.fit_transform(X[:, 1])

# One-hot encode the integer-coded Geography column
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()

# Drop the first dummy column to avoid the dummy variable trap
X = X[:, 1:]
X[0:2]
array([[ 0.00000000e+00,  0.00000000e+00,  6.19000000e+02,  4.20000000e+01,
         2.00000000e+00,  0.00000000e+00,  1.00000000e+00,  1.00000000e+00,
         1.00000000e+00,  1.01348880e+05],
       [ 0.00000000e+00,  1.00000000e+00,  6.08000000e+02,  4.10000000e+01,
         1.00000000e+00,  8.38078600e+04,  1.00000000e+00,  0.00000000e+00,
         1.00000000e+00,  1.12542580e+05]])
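Note: the categorical_features argument used above was deprecated in scikit-learn 0.20 and later removed. On a recent scikit-learn, a roughly equivalent preprocessing step can be written with ColumnTransformer; this is a sketch under that assumption (OneHotEncoder handles string columns directly, and drop='first' replaces the manual X = X[:, 1:] slice), not the code that produced the outputs above.

# Sketch: the same encoding on scikit-learn >= 0.21, starting from the raw X
# (Geography strings still at column 1)
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

ct = ColumnTransformer(
    [('geo', OneHotEncoder(drop = 'first'), [1])],  # drop one dummy to avoid the trap
    remainder = 'passthrough'                       # keep the other columns as-is
)
X = ct.fit_transform(X)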
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
X_train[0:2]
array([[-0.5698444 ,  1.74309049,  0.16958176, -0.46460796,  0.00666099,
        -1.21571749,  0.8095029 ,  0.64259497, -1.03227043,  1.10643166],
       [ 1.75486502, -0.57369368, -2.30455945,  0.30102557, -1.37744033,
        -0.00631193, -0.92159124,  0.64259497,  0.9687384 , -0.74866447]])
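StandardScaler standardizes each column to z = (x - mean) / std, with the mean and standard deviation estimated on the training set only. That is why fit_transform is called on X_train but only transform on X_test: no test-set information leaks into the scaling. A quick sanity check:

# Sanity check: every scaled training column should have mean ~ 0 and std ~ 1
print(X_train.mean(axis = 0).round(6))
print(X_train.std(axis = 0).round(6))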

Importing the Keras libraries and packages

# !pip install keras
import keras
from keras.models import Sequential
from keras.layers import Dense

Initialization of the ANN

classifier = Sequential()
dir(keras.activations)
['K', 'Layer', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'absolute_import', 'deserialize', 'deserialize_keras_object', 'elu', 'get', 'hard_sigmoid', 'linear', 'relu', 'selu', 'serialize', 'sigmoid', 'six', 'softmax', 'softplus', 'softsign', 'tanh', 'warnings']
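The two activations used below are relu for the hidden layers and sigmoid for the output. As a quick illustration (plain NumPy, not how Keras computes them internally):

# relu clips negatives to zero; sigmoid squashes any real number into (0, 1)
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(np.maximum(0, z))      # relu:    [0.    0.    0.    0.5   2.  ]
print(1 / (1 + np.exp(-z)))  # sigmoid: roughly [0.12  0.38  0.5   0.62  0.88]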

Adding the input layer and the first hidden layer

classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 10))
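The choice of 6 units follows a common rule of thumb: take roughly the average of the input and output dimensions, here (10 + 1) / 2 ≈ 6. It is a starting point, not a tuned value. Since input_dim is given, Keras builds the layer immediately, so its weight shapes can be inspected right away:

# A Dense layer stores a kernel of shape (input_dim, units) plus one bias per unit
W, b = classifier.layers[0].get_weights()
print(W.shape, b.shape)  # expected: (10, 6) (6,)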

Adding more hidden layer(s) in between

# Adding the second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))

Adding the output layer

# Output layer: a single sigmoid unit giving the churn probability
# (input_dim is only needed on the first layer, so it is omitted here)
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
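At this point the full architecture is 10 inputs -> 6 -> 6 -> 1, and summary() is a cheap way to confirm it. The expected trainable parameter counts are 10×6+6 = 66, 6×6+6 = 42, and 6×1+1 = 7, for 115 in total:

# Print the layer stack and per-layer parameter counts
classifier.summary()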
dir(keras.optimizers)
['Adadelta', 'Adagrad', 'Adam', 'Adamax', 'K', 'Nadam', 'Optimizer', 'RMSprop', 'SGD', 'TFOptimizer', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'absolute_import', 'adadelta', 'adagrad', 'adam', 'adamax', 'clip_norm', 'copy', 'deserialize', 'deserialize_keras_object', 'get', 'interfaces', 'nadam', 'rmsprop', 'serialize', 'serialize_keras_object', 'sgd', 'six', 'tf', 'zip']
dir(keras.losses)
['K', 'KLD', 'MAE', 'MAPE', 'MSE', 'MSLE', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', 'absolute_import', 'binary_crossentropy', 'categorical_crossentropy', 'categorical_hinge', 'cosine', 'cosine_proximity', 'deserialize', 'deserialize_keras_object', 'get', 'hinge', 'kld', 'kullback_leibler_divergence', 'logcosh', 'mae', 'mape', 'mean_absolute_error', 'mean_absolute_percentage_error', 'mean_squared_error', 'mean_squared_logarithmic_error', 'mse', 'msle', 'poisson', 'serialize', 'serialize_keras_object', 'six', 'sparse_categorical_crossentropy', 'squared_hinge']

Compiling the ANN

classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
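The string 'adam' uses the optimizer's default hyperparameters. To tune the learning rate you can pass a configured instance instead; a sketch, assuming the Keras 2.x API this notebook uses (where the argument is named lr):

# Equivalent compile call with an explicit Adam instance
from keras.optimizers import Adam
classifier.compile(optimizer = Adam(lr = 0.001), loss = 'binary_crossentropy', metrics = ['accuracy'])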

Fitting the ANN to the Training set

classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
Epoch 1/100
8000/8000 [==============================] - 1s 174us/step - loss: 0.4840 - acc: 0.7960
Epoch 2/100
8000/8000 [==============================] - 1s 137us/step - loss: 0.4323 - acc: 0.7960
Epoch 3/100
8000/8000 [==============================] - 1s 148us/step - loss: 0.4270 - acc: 0.7995
Epoch 4/100
8000/8000 [==============================] - 1s 144us/step - loss: 0.4227 - acc: 0.8196
Epoch 5/100
8000/8000 [==============================] - 1s 145us/step - loss: 0.4205 - acc: 0.8274
[... epochs 6-98 omitted: loss settles around 0.405-0.408 and accuracy around 0.832-0.836 ...]
Epoch 99/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.4059 - acc: 0.8345
Epoch 100/100
8000/8000 [==============================] - 1s 143us/step - loss: 0.4056 - acc: 0.8334
<keras.callbacks.History at 0x7f43dd4f69b0>
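fit() returns that History object, and its .history dict holds the per-epoch metrics, which makes plotting a training curve straightforward. A sketch of how the call above could be rewritten to keep the return value and hold out a validation split (note that calling fit again on an already-trained model continues from the current weights):

# Keep the History and plot training vs. validation loss
history = classifier.fit(X_train, y_train, batch_size = 10, epochs = 100,
                         validation_split = 0.1, verbose = 0)
plt.plot(history.history['loss'], label = 'training loss')
plt.plot(history.history['val_loss'], label = 'validation loss')
plt.xlabel('epoch')
plt.ylabel('binary cross-entropy')
plt.legend()
plt.show()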

Predicting the Test set results

y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
y_pred[0:10]
array([[False],
       [False],
       [False],
       [False],
       [False],
       [ True],
       [False],
       [False],
       [False],
       [ True]], dtype=bool)
y_test[0:10]
array([0, 1, 0, 0, 0, 1, 0, 0, 1, 1])
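Since y_pred is a column of booleans and y_test is a flat array of 0/1 labels, flattening one of them allows an elementwise comparison, and the mean of the matches is the test-set accuracy (the same number computed from the confusion matrix below):

# Fraction of test samples classified correctly
(y_pred.ravel() == y_test).mean()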

Making the Confusion Matrix

from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm
array([[1548,   47],
       [ 278,  127]])
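In scikit-learn's convention the rows are the true classes and the columns the predicted classes, so this reads as [[TN, FP], [FN, TP]]: 1548 true negatives, 47 false positives, 278 false negatives, and 127 true positives. Precision and recall for the churn class follow directly:

# Precision and recall for the positive (churned) class
tn, fp, fn, tp = cm.ravel()
print(tp / (tp + fp))  # precision: 127 / 174, roughly 0.73
print(tp / (tp + fn))  # recall:    127 / 405, roughly 0.31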

Calculating Accuracy

# Accuracy = correct predictions (the matrix diagonal) / all predictions
(cm[0, 0] + cm[1, 1])/np.sum(cm)
0.8375
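As a cross-check, Keras can compute the test-set loss and accuracy directly, and it should agree with the figure above:

# evaluate returns [loss, accuracy] because we compiled with metrics=['accuracy']
loss, acc = classifier.evaluate(X_test, y_test, verbose = 0)
print(loss, acc)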